 differentiable programming


Response to reviewers for the paper: "On Lazy Training in Differentiable Programming"

Neural Information Processing Systems

We thank the reviewers for their comments and suggestions. Hereafter, we list reviewers' (sometimes paraphrased) comments along with our answers. Each answer will translate into a clarification in the final version. Reviewers #2 and #3 felt that our message was lacking clarity. A.2) We will add more pointers to their statistical analysis from the existing literature (e.g. L81-90 in the main paper; often α(m) = 1/√m in these works).


NeuMiss networks: differentiable programming for supervised learning with missing values.

Neural Information Processing Systems

The presence of missing values makes supervised learning much more challenging. Indeed, previous work has shown that even when the response is a linear function of the complete data, the optimal predictor is a complex function of the observed entries and the missingness indicator. As a result, the computational or sample complexities of consistent approaches depend on the number of missing patterns, which can be exponential in the number of dimensions. In this work, we derive the analytical form of the optimal predictor under a linearity assumption and various missing data mechanisms including Missing at Random (MAR) and self-masking (Missing Not At Random). Based on a Neumann-series approximation of the optimal predictor, we propose a new principled architecture, named NeuMiss networks. Their originality and strength come from the use of a new type of non-linearity: the multiplication by the missingness indicator. We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns. As a result they scale well to problems with many features, and remain statistically efficient for medium-sized samples. Moreover, we show that, contrary to procedures using EM or imputation, they are robust to the missing data mechanism, including difficult MNAR settings such as self-masking.
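The "new type of non-linearity" mentioned above, multiplication by the missingness indicator, can be sketched in a few lines. The following is a simplified, hypothetical illustration of the idea (the function name, the residual connection, and the use of a shared mean estimate are assumptions for the sketch, not the paper's exact architecture):

```python
import numpy as np

def neumiss_like_forward(x, mask, weights, mu):
    """Hypothetical sketch of a NeuMiss-style forward pass.

    x      : (d,) input with missing entries filled by 0
    mask   : (d,) indicator, 1.0 where observed, 0.0 where missing
    weights: list of (d, d) matrices, one per iteration/layer
    mu     : (d,) mean estimate used to center the observed entries
    """
    h = (x - mu) * mask          # center, then zero out missing coordinates
    out = h
    for W in weights:
        # The characteristic nonlinearity: multiply by the missingness
        # indicator instead of applying e.g. a ReLU.
        out = (W @ out) * mask + h
    return out

rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)
mask = np.array([1.0, 0.0, 1.0, 1.0])   # second feature is missing
weights = [0.1 * rng.normal(size=(d, d)) for _ in range(3)]
y = neumiss_like_forward(x * mask, mask, weights, mu=np.zeros(d))
print(y)   # the missing coordinate stays exactly zero through every layer
```

Because the mask is applied after every linear map, missing coordinates never leak into the output, and the same weight matrices serve every missingness pattern.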


Compiling to recurrent neurons

Velez-Ginorio, Joey, Amin, Nada, Kording, Konrad, Zdancewic, Steve

arXiv.org Artificial Intelligence

Discrete structures are currently second-class in differentiable programming. Since functions over discrete structures lack overt derivatives, differentiable programs do not differentiate through them and limit where they can be used. For example, when programming a neural network, conditionals and iteration cannot be used everywhere; they can break the derivatives necessary for gradient-based learning to work. This limits the class of differentiable algorithms we can directly express, imposing restraints on how we build neural networks and differentiable programs more generally. However, these restraints are not fundamental. Recent work shows conditionals can be first-class, by compiling them into differentiable form as linear neurons. Similarly, this work shows iteration can be first-class -- by compiling to linear recurrent neurons. We present a minimal typed, higher-order and linear programming language with iteration called $\textsf{Cajal}\scriptstyle(\mathbb{\multimap}, \mathbb{2}, \mathbb{N})$. We prove its programs compile correctly to recurrent neurons, allowing discrete algorithms to be expressed in a differentiable form compatible with gradient-based learning. With our implementation, we conduct two experiments where we link these recurrent neurons against a neural network solving an iterative image transformation task. This determines part of its function prior to learning. As a result, the network learns faster and with greater data-efficiency relative to a neural network programmed without first-class iteration. A key lesson is that recurrent neurons enable a rich interplay between learning and the discrete structures of ordinary programming.
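To make the idea of iteration compiling to linear recurrent neurons concrete, here is a small illustration in the spirit of the paper (this is not the Cajal compiler's output; the Fibonacci example and weight matrix are my own, chosen to show a discrete loop expressed as a purely linear recurrence):

```python
import numpy as np

# A loop such as `for _ in range(n): a, b = a + b, a` can be expressed as a
# linear recurrent neuron h_{t+1} = W h_t with no activation, so the whole
# computation remains differentiable with respect to W.
W = np.array([[1.0, 1.0],
              [1.0, 0.0]])

def fib_rnn(n, h0=np.array([1.0, 0.0])):
    h = h0
    for _ in range(n):          # unrolled recurrence, like an RNN over n steps
        h = W @ h               # linear update: gradients flow through freely
    return h[1]                 # after n steps, h = (F(n+1), F(n))

print([int(fib_rnn(n)) for n in range(1, 8)])  # 1, 1, 2, 3, 5, 8, 13
```

Since every step is linear, derivatives with respect to the recurrent weights exist everywhere; this is what lets such compiled discrete structure sit inside a network trained by gradient descent.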


Supplementary materials - NeuMiss networks: differentiable programming for supervised learning with missing values A Proofs

Neural Information Processing Systems

Proof of Lemma 2. Identifying the second- and first-order terms in X, we get: The last equality allows us to conclude the proof. Additionally, assume that either Assumption 2 or Assumption 3 holds. This concludes the proof according to Lemma 1. Here we establish an auxiliary result, controlling the convergence of the Neumann iterates to the matrix inverse. Note that Proposition A.1 can easily be extended to the general case by working with M (61), i.e., an M nonlinearity is applied to the activations.
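The convergence of Neumann iterates to the matrix inverse is easy to check numerically. A minimal sketch (the test matrix S is illustrative, chosen so that ||I - S|| = 0.5 and the series is guaranteed to converge):

```python
import numpy as np

def neumann_inverse(S, n_iter):
    """Approximate S^{-1} with truncated Neumann iterates:
    M_{k+1} = I + (I - S) M_k, so M_k = sum_{j<=k} (I - S)^j,
    which converges to S^{-1} whenever ||I - S|| < 1."""
    I = np.eye(S.shape[0])
    M = I                        # order-0 iterate
    for _ in range(n_iter):
        M = I + (I - S) @ M
    return M

rng = np.random.default_rng(0)
d = 5
B = rng.normal(size=(d, d))
B = (B + B.T) / 2                                # symmetric perturbation
S = np.eye(d) + 0.5 * B / np.linalg.norm(B, 2)   # so ||I - S||_2 = 0.5
err = [np.linalg.norm(neumann_inverse(S, k) - np.linalg.inv(S))
       for k in (1, 5, 20)]
print(err)   # the error shrinks geometrically, here like 0.5^k
```

The geometric decay in the iteration count is what makes a small, fixed number of Neumann layers a reasonable stand-in for an exact matrix inverse.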


Response to reviewers for the paper: "On Lazy Training in Differentiable Programming"

Neural Information Processing Systems

We thank the reviewers for their comments and suggestions. Hereafter, we list reviewers' (sometimes paraphrased) comments along with our answers. Each answer will translate into a clarification in the final version. Reviewers #2 and #3 felt that our message was lacking clarity. It seems that this paper considers both the empirical loss and the population loss. The authors should provide an analysis of the generalization behavior of two-layer neural networks.


Reviews: On Lazy Training in Differentiable Programming

Neural Information Processing Systems

The paper provides some interesting understanding, but it is not significant enough to explain interesting issues in deep learning. The paper shows that lazy training can be caused by parameter scaling, and is not special to overparameterization of neural networks. What does this tell us about overparameterized neural networks? Does this result imply that the lazy regime of overparameterized neural networks is necessarily due to parameter scaling? If not, the lazy regime of overparameterized neural networks cannot be explained simply by parameter scaling.


Review for NeurIPS paper: NeuMiss networks: differentiable programming for supervised learning with missing values.

Neural Information Processing Systems

Summary and Contributions: The paper derives analytical expressions for the optimal predictors in the presence of Missing Completely At Random (MCAR), Missing At Random (MAR) and self-masking missingness in the linear Gaussian case. Then, the paper proposes the Neumann network for learning the optimal predictor in the MAR case and shows insights into, and a connection with, neural networks with ReLU activations. There are two challenges in learning the optimal predictor from data containing missing values: 1) computing the inverse of the covariance matrices appearing in the MAR optimal predictor; 2) the 2^d optimal predictors with different missingness patterns required to learn the optimal predictor, where d is the number of features/covariates. For the first one, the paper provides a theoretical analysis in which the inverse is approximated in a recursive manner with convergence and upper-bound guarantees. For the second one, the Neumann network shares the weights of the optimal predictors across different missingness patterns, which turns out to be empirically more data-efficient and robust in self-masking missingness cases.


Review for NeurIPS paper: NeuMiss networks: differentiable programming for supervised learning with missing values.

Neural Information Processing Systems

The paper attacks the classical problem of linear regression with missing values. It computes the Bayes predictor in several cases with missing values and then uses a Neumann series to approximate the Bayes predictor. This approximation is then used to design neural networks with ReLU functions. The propositions describing self-masking missingness, which appears to be a novel concept, are interesting but can be considered slightly restrictive because of the linear Gaussian assumptions. However, both the results and the methods should be of interest to the NeurIPS 2020 community.


On Lazy Training in Differentiable Programming

Neural Information Processing Systems

In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high dimensional tasks.
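The scaling effect described above can be reproduced in a toy setting. The sketch below is my own hypothetical setup, not the paper's experiments: a tiny two-layer tanh network with a symmetric initialization (so the initial output is zero), trained by gradient descent on the rescaled loss (1/α²)·||α·f(w) - y||². As α grows, the parameters move less and less while the scaled output still chases the targets, which is the lazy regime:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                   # 1-d inputs
y = np.sin(x)                            # targets

m = 4                                    # hidden width (paired for symmetry)
b_half = rng.normal(size=m // 2)
b0 = np.concatenate([b_half, b_half])    # duplicated hidden weights
a_half = rng.normal(size=m // 2)
a0 = np.concatenate([a_half, -a_half])   # opposite output weights => f(w0) = 0

def displacement(alpha, lr=0.1, steps=50):
    """Parameter movement after GD on the (1/alpha^2)-scaled loss."""
    a, b = a0.copy(), b0.copy()
    for _ in range(steps):
        phi = np.tanh(np.outer(x, b))            # (n, m) hidden features
        resid = (phi @ a - y / alpha) * (2 / len(x))
        grad_a = phi.T @ resid
        grad_b = ((1 - phi**2) * np.outer(x, a)).T @ resid
        a -= lr * grad_a
        b -= lr * grad_b
    return np.linalg.norm(np.concatenate([a - a0, b - b0]))

disps = [displacement(al) for al in (1.0, 10.0, 100.0)]
print(disps)   # displacement shrinks as alpha grows: the lazy regime
```

With f(w0) = 0, the effective target for the parameters is y/α, so the total parameter displacement scales roughly like 1/α: for large α the model stays close to its linearization around the initialization.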

